Abstract:
This two-part paper investigates cache replacement schemes with the objective of developing a general model to unify the analysis of various replacement schemes and illustrate their features. To achieve this goal, we study the dynamic process of caching in the vector space and introduce the concept of state transition field (STF) to model and characterize replacement schemes. In the first part of this work, we consider the case of time-invariant content popularity based on the independent reference model (IRM). In this case, we demonstrate that the resulting STFs are static, and each replacement scheme leads to a unique STF. The STF determines the expected trace of the dynamic change in the cache state distribution, as a result of content requests and replacements, from any initial point. Moreover, given the replacement scheme, the STF is determined solely by the content popularity. Using four example schemes, including random replacement (RR) and least recently used (LRU), we show that the STF can be used to analyze replacement schemes: finding their steady states, highlighting their differences, and revealing insights regarding the impact of knowledge of content popularity. Based on the above results, the STF is shown to be useful for characterizing and illustrating replacement schemes. Extensive numerical results are presented, comparing analytical STFs with STFs obtained from simulations for the considered example replacement schemes.
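To make the setting concrete, the following minimal sketch simulates the two named schemes, RR and LRU, under IRM requests, where each request is drawn i.i.d. from a fixed popularity law (a Zipf distribution is assumed here purely for illustration; all parameter values are hypothetical, and this does not implement the paper's STF analysis itself):

```python
import random
from collections import OrderedDict

def zipf_popularity(n, alpha=0.8):
    """IRM: requests are drawn i.i.d. from a fixed (here Zipf) popularity law."""
    w = [1.0 / (k + 1) ** alpha for k in range(n)]
    s = sum(w)
    return [x / s for x in w]

def simulate(policy, n_contents=20, cache_size=4, n_requests=50_000, seed=1):
    """Return the hit ratio of 'lru' or 'rr' replacement under IRM requests."""
    rng = random.Random(seed)
    probs = zipf_popularity(n_contents)
    cache = OrderedDict()              # key order tracks recency for LRU
    hits = 0
    for _ in range(n_requests):
        c = rng.choices(range(n_contents), weights=probs)[0]
        if c in cache:
            hits += 1
            if policy == "lru":
                cache.move_to_end(c)   # refresh recency on a hit
        else:
            if len(cache) >= cache_size:
                if policy == "rr":     # evict a uniformly random cached item
                    cache.pop(rng.choice(list(cache)))
                else:                  # evict the least recently used item
                    cache.popitem(last=False)
            cache[c] = None
    return hits / n_requests
```

The cache state visited at each step by such a simulation is exactly the kind of trajectory that the STF summarizes in expectation from any initial point.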
Abstract:
Hardware cache behavior is an important factor in the performance of memory-resident, data-intensive systems such as XML filtering engines. A key data structure in several recent XML filters is the automaton, which is used to represent the long-running XML queries in main memory. In this paper, we study the cache performance of automaton-based XML filtering through analytical modeling and system measurement. Furthermore, we propose a cache-conscious automaton organization technique, called the hot buffer, to improve the locality of automaton state transitions. Our results show that 1) our cache performance model for XML filtering automata is highly accurate and 2) the hot buffer improves the cache performance as well as the overall performance of automaton-based XML filtering.
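A minimal sketch of the automaton idea the abstract refers to: many path queries are compiled into one prefix-sharing transition table, and every element in the XML stream triggers a table lookup. The query paths below are hypothetical examples, and this sketch deliberately omits the paper's hot-buffer layout; it only shows why the transition table is the hot data structure (states near the root are visited on almost every lookup, which is the locality a cache-conscious layout can exploit):

```python
def build_automaton(queries):
    """Compile simple path queries (e.g. '/a/b' -> query id) into a
    prefix-sharing automaton: a transition table state -> {tag: state}."""
    trans = [{}]                       # state 0 is the root state
    accepting = {}                     # accepting state -> query id
    for path, qid in queries.items():
        state = 0
        for tag in path.strip("/").split("/"):
            nxt = trans[state].get(tag)
            if nxt is None:            # extend the automaton with a new state
                trans.append({})
                nxt = len(trans) - 1
                trans[state][tag] = nxt
            state = nxt
        accepting[state] = qid
    return trans, accepting

def match_path(doc_path, trans, accepting):
    """Walk one root-to-leaf element path through the automaton; each tag
    costs one transition-table lookup. Returns the ids of matched queries."""
    matched = set()
    state = 0
    for tag in doc_path:
        state = trans[state].get(tag)
        if state is None:              # no transition: this path cannot match
            break
        if state in accepting:
            matched.add(accepting[state])
    return matched
```

For example, compiling `{"/a/b": 1, "/a/c": 2}` shares the `/a` prefix in a single state, so both queries are checked with one lookup per element of the document path.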